Analysis and Detection of Web Spam by Means of Web Content

Authors

  • Víctor M. Prieto
  • Manuel Álvarez
  • Rafael López-García
  • Fidel Cacheda
Abstract

Web Spam is one of the main difficulties that crawlers have to overcome. According to Gyöngyi and Garcia-Molina, it is defined as “any deliberate human action that is meant to trigger an unjustifiably favourable relevance or importance of some web pages considering the pages’ true value”. There are several studies on characterising and detecting Web Spam pages; however, none of them deals with all the possible kinds of Web Spam. This paper presents an analysis of different kinds of Web Spam pages and identifies new elements that characterise them. Taking these into account, we propose a new Web Spam detection system called SAAD, which is based on a set of heuristics and their use in a C4.5 classifier. Its results are further improved by means of Bagging and Boosting techniques. We have also tested our system on some well-known Web Spam datasets and found it to be very effective.
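As a rough illustration of the pipeline described in the abstract, the sketch below computes a few content-based heuristics per page and feeds them to a decision-tree classifier wrapped in Bagging and Boosting. The feature functions are illustrative placeholders rather than SAAD's actual heuristic set, and scikit-learn's CART tree stands in for the C4.5 classifier named in the paper.

```python
# Minimal sketch of a SAAD-style pipeline: content heuristics -> tree classifier,
# then Bagging/Boosting. Features and parameters are illustrative assumptions.
import re

from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier


def content_features(html: str, visible_text: str) -> list:
    """Compute a few simple, illustrative content heuristics for one page."""
    words = visible_text.split()
    n_words = len(words)
    avg_word_len = (sum(len(w) for w in words) / n_words) if n_words else 0.0
    n_links = len(re.findall(r"<a\s", html, flags=re.I))
    # Ratio of raw markup to visible text; spam pages often skew this ratio.
    markup_ratio = len(html) / (len(visible_text) + 1)
    return [n_words, avg_word_len, n_links, markup_ratio]


def build_classifiers():
    # A CART decision tree stands in for the paper's C4.5 classifier.
    bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)
    boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                                 n_estimators=50)
    return bagged, boosted


# Usage, with X a list of feature vectors and y the 0/1 spam labels:
# bagged, boosted = build_classifiers()
# bagged.fit(X, y)
# prediction = bagged.predict([content_features(page_html, page_text)])
```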


Related articles

Analyzing new features of infected web content in detection of malicious web pages

Recent improvements in web standards and technologies enable attackers to hide and obfuscate infectious code with new methods and thus escape security filters. In this paper, we study the application of machine learning techniques to detecting malicious web pages. In order to detect malicious web pages, we propose and analyse a novel set of features including HTML, JavaScript (jQuery...
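A hedged sketch of the kind of HTML/JavaScript feature extraction such a classifier could consume is shown below; the counted indicators are common examples chosen for illustration, not the feature set proposed in that paper.

```python
# Illustrative HTML/JavaScript indicators for malicious-page detection.
# The specific counters below are assumptions, not the paper's feature set.
import re


def extract_page_features(html: str) -> dict:
    """Count simple markup and script indicators often fed to classifiers."""
    return {
        "n_script_tags": len(re.findall(r"<script\b", html, flags=re.I)),
        "n_iframes": len(re.findall(r"<iframe\b", html, flags=re.I)),
        "n_external_js": len(re.findall(r"<script[^>]+src=", html, flags=re.I)),
        "n_eval_calls": html.count("eval("),
        "n_unescape_calls": html.count("unescape("),
    }


# The resulting dictionaries can be vectorised (for example with
# sklearn.feature_extraction.DictVectorizer) and passed to any classifier.
```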


Link Spam Detection based on DBSpamClust with Fuzzy C-means Clustering

Search engines have become the omnipresent means of access to the Web. Search engine spamming is the technique of deceiving a search engine's ranking and inflating it. Web spammers have taken advantage of the vulnerability of link-based ranking algorithms by creating many artificial references or links in order to acquire higher-than-deserved rankings in search engines' results. Link b...
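For the clustering step named in this approach, the following is a minimal NumPy sketch of Fuzzy C-means; the per-page link features used in the usage comment are hypothetical placeholders, and this is not the DBSpamClust algorithm itself.

```python
# Minimal Fuzzy C-means sketch (standard algorithm, not DBSpamClust itself).
import numpy as np


def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Return cluster centres and the fuzzy membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every point to every centre (epsilon avoids division by 0).
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-10
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U


# Example with hypothetical per-page link features (in-degree, out-degree):
# X = np.array([[3, 120], [5, 4], [2, 150], [7, 6]], dtype=float)
# centres, U = fuzzy_c_means(X, n_clusters=2)
```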


A structural, content-similarity measure for detecting spam documents on the web

Purpose: The Web provides its users with abundant information. Unfortunately, when a Web search is performed, both users and search engines must deal with an annoying problem: the presence of spam documents that are ranked among legitimate ones. The mixed results downgrade the performance of search engines and frustrate users, who are required to filter out useless information. To improve the qua...


Using Semantic Analysis to Classify Search Engine Spam

Search engines have tried many techniques to filter out spam pages before they can appear on the query results page. In Section 2 we present a collection of current methods that are being used to combat spam. We introduce a new approach to spam detection in Section 3 that uses semantic analysis of textual content as a means of detecting spam. This new approach uses a series of content ana...


Identifying Spam Web Pages Based on Content Similarity

The Web provides its users with abundant information. Unfortunately, when a Web search is performed, both users and search engines are faced with an annoying problem: the presence of misleading Web pages, i.e., spam Web pages, that are ranked among legitimate Web pages. The mixed results downgrade the performance of search engines and frustrate users who are required to filter out useless infor...



Journal:

Volume   Issue

Pages  -

Publication date: 2012